Adversarial machine learning
Adversarial machine learning is a research field that lies at the intersection of machine learning and computer security. It aims to enable the safe adoption of machine learning techniques in adversarial settings like spam filtering, malware detection and biometric recognition.
The problem arises from the fact that machine learning techniques were originally designed for stationary environments, in which the training and test data are assumed to be drawn from the same (although possibly unknown) distribution. In the presence of intelligent and adaptive adversaries, however, this working hypothesis is likely to be violated to at least some degree, depending on the adversary. In fact, a malicious adversary can carefully manipulate the input data, exploiting specific vulnerabilities of the learning algorithm, to compromise the security of the whole system.
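As a concrete illustration of such input manipulation, the sketch below shows a simple gradient-based evasion attack on a differentiable classifier: a test input is nudged in the direction that most increases the model's loss, so that a correctly classified sample may be misclassified at test time. This is an illustrative example written for this article, not a method taken from the cited works; the PyTorch model, the loss, and the perturbation budget epsilon are assumptions.
<source lang="python">
# Minimal sketch of a gradient-based evasion attack (illustrative only).
# Assumes `model` is a differentiable PyTorch classifier returning class
# logits, `x` is a batch of test inputs and `y_true` their true labels.
import torch
import torch.nn.functional as F

def evasion_attack(model, x, y_true, epsilon=0.1):
    """Perturb x by +/- epsilon per feature in the direction that raises the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y_true)   # loss w.r.t. the true labels
    loss.backward()                                # gradient of the loss w.r.t. the input
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
</source>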
Examples include: attacks in spam filtering, where spam messages are obfuscated through misspelling of bad words or insertion of good words;〔N. Dalvi, P. Domingos, Mausam, S. Sanghai, and D. Verma. “Adversarial classification”. In Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pages 99–108, Seattle, 2004.〕〔D. Lowd and C. Meek. “Adversarial learning”. In Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pages 641–647, Chicago, IL, 2005.〕〔B. Biggio, I. Corona, G. Fumera, G. Giacinto, and F. Roli. “Bagging classifiers for fighting poisoning attacks in adversarial classification tasks”. In C. Sansone, J. Kittler, and F. Roli, editors, 10th International Workshop on Multiple Classifier Systems (MCS), volume 6713 of Lecture Notes in Computer Science, pages 350–359. Springer-Verlag, 2011.〕〔B. Biggio, G. Fumera, and F. Roli. “Adversarial pattern classification using multiple classifiers and randomisation”. In 12th Joint IAPR International Workshop on Structural and Syntactic Pattern Recognition (SSPR 2008), volume 5342 of Lecture Notes in Computer Science, pages 500–509, Orlando, Florida, USA, 2008. Springer-Verlag.〕〔B. Biggio, G. Fumera, and F. Roli. “Multiple classifier systems for robust classifier design in adversarial environments”. International Journal of Machine Learning and Cybernetics, 1(1):27–41, 2010.〕〔M. Bruckner, C. Kanzow, and T. Scheffer. “Static prediction games for adversarial learning problems”. J. Mach. Learn. Res., 13:2617–2654, 2012.〕〔M. Bruckner and T. Scheffer. “Nash equilibria of static prediction games”. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 171–179. 2009.〕〔M. Bruckner and T. Scheffer. “Stackelberg games for adversarial prediction problems”. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’11, pages 547–555, New York, NY, USA, 2011. ACM.〕〔A. Globerson and S. T. Roweis. “Nightmare at test time: robust learning by feature deletion”. In W. W. Cohen and A. Moore, editors, Proceedings of the 23rd International Conference on Machine Learning, volume 148, pages 353–360. ACM, 2006.〕〔A. Kolcz and C. H. Teo. “Feature weighting for improved classifier robustness”. In Sixth Conference on Email and Anti-Spam (CEAS), Mountain View, CA, USA, 2009.〕〔B. Nelson, M. Barreno, F. J. Chi, A. D. Joseph, B. I. P. Rubinstein, U. Saini, C. Sutton, J. D. Tygar, and K. Xia. “Exploiting machine learning to subvert your spam filter”. In LEET’08: Proceedings of the 1st Usenix Workshop on Large-Scale Exploits and Emergent Threats, pages 1–9, Berkeley, CA, USA, 2008. USENIX Association.〕〔G. L. Wittel and S. F. Wu. “On attacking statistical spam filters”. In First Conference on Email and Anti-Spam (CEAS), Microsoft Research Silicon Valley, Mountain View, California, 2004.〕 attacks in computer security, e.g., to obfuscate malware code within network packets〔P. Fogla, M. Sharif, R. Perdisci, O. Kolesnikov, and W. Lee. “Polymorphic blending attacks”. In USENIX-SS’06: Proc. of the 15th Conf. on USENIX Security Symp., CA, USA, 2006. USENIX Association.〕 or mislead signature detection;〔J. Newsome, B. Karp, and D. Song. “Paragraph: Thwarting signature learning by training maliciously”. In Recent Advances in Intrusion Detection, LNCS, pages 81–105. Springer, 2006.〕 and attacks in biometric recognition, where fake biometric traits may be exploited to impersonate a legitimate user (biometric spoofing)〔R. N. Rodrigues, L. L. Ling, and V. Govindaraju. “Robustness of multimodal biometric fusion methods against spoof attacks”. J. Vis. Lang. Comput., 20(3):169–179, 2009.〕 or to compromise users’ template galleries that are adaptively updated over time.〔B. Biggio, L. Didaci, G. Fumera, and F. Roli. “Poisoning attacks to compromise face templates”. In 6th IAPR Int’l Conf. on Biometrics (ICB 2013), pages 1–7, Madrid, Spain, 2013.〕〔M. Torkamani and D. Lowd. “Convex Adversarial Collective Classification”. In Proceedings of The 30th International Conference on Machine Learning, pages 642–650, Atlanta, GA, 2013.〕
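To make the good-word insertion attack mentioned above concrete, here is a hedged toy sketch (written for this article, not code from the cited papers) against a multinomial naive Bayes spam filter. The training messages, the spam message, and the inserted words are all illustrative assumptions; on this toy data, appending words the filter associates with legitimate mail sharply lowers the spam probability of an otherwise unchanged spam message.
<source lang="python">
# Toy "good word insertion" attack against a naive Bayes spam filter
# (illustrative sketch; all data below is made up for this example).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

ham  = ["meeting agenda attached", "project schedule update", "lunch tomorrow"]
spam = ["cheap pills buy now", "win money fast", "buy cheap pills now"]

vec = CountVectorizer()
X = vec.fit_transform(ham + spam)
clf = MultinomialNB().fit(X, [0] * len(ham) + [1] * len(spam))

msg = "buy cheap pills now"
# The adversary appends "good words" without changing the spam payload.
attacked = msg + " meeting agenda project schedule lunch tomorrow"

print(clf.predict_proba(vec.transform([msg]))[0, 1])       # spam probability: high
print(clf.predict_proba(vec.transform([attacked]))[0, 1])  # drops well below 0.5 here
</source>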
==Security evaluation==

To understand the security properties of learning algorithms in adversarial settings, one should address the following main issues:〔L. Huang, A. D. Joseph, B. Nelson, B. Rubinstein, and J. D. Tygar. “Adversarial machine learning”. In 4th ACM Workshop on Artificial Intelligence and Security (AISec 2011), pages 43–57, Chicago, IL, USA, October 2011.〕〔B. Biggio, I. Corona, B. Nelson, B. Rubinstein, D. Maiorca, G. Fumera, G. Giacinto, and F. Roli. “Security evaluation of support vector machines in adversarial environments”. In Y. Ma and G. Guo, editors, Support Vector Machines Applications, pages 105–153. Springer, 2014.〕〔B. Biggio, G. Fumera, and F. Roli. “Pattern recognition systems under attack: Design issues and research challenges”. Int’l J. Patt. Recogn. Artif. Intell., 28(7):1460002, 2014.〕
* identifying potential vulnerabilities of machine learning algorithms during learning and classification;
* devising appropriate attacks that correspond to the identified threats and evaluating their impact on the targeted system;
* proposing countermeasures to improve the security of machine learning algorithms against the considered attacks.
This process amounts to simulating a proactive arms race (instead of a reactive one), in which system designers try to anticipate the adversary in order to understand whether there are potential vulnerabilities that should be fixed in advance, for instance by means of specific countermeasures such as additional features or different learning algorithms. However, proactive approaches are not necessarily superior to reactive ones. For instance, Barth et al.〔A. Barth, B. I. P. Rubinstein, M. Sundararajan, J. C. Mitchell, D. Song, and P. L. Bartlett. “A learning-based approach to reactive security”. IEEE Transactions on Dependable and Secure Computing, 9(4):482–493, 2012.〕 showed that, under some circumstances, reactive approaches are more suitable for improving system security.
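The second of the issues listed above (devising attacks and evaluating their impact) is often carried out empirically by measuring how the classifier's performance degrades as the strength of a simulated attack grows. The outline below is an illustrative sketch under assumed interfaces, not a prescribed API: the attack callable is a placeholder for any attack discussed above, and the classifier is assumed to expose a scikit-learn-style predict() method.
<source lang="python">
# Illustrative sketch of attack simulation for security evaluation:
# measure accuracy as the simulated adversary's strength increases.
# All names here are assumptions made for this example.
import numpy as np

def security_evaluation_curve(clf, attack, X_test, y_test, strengths):
    """Return classifier accuracy on adversarially perturbed test data
    for each attack strength in `strengths`."""
    accuracies = []
    for eps in strengths:
        X_adv = attack(clf, X_test, y_test, eps)   # adversary-manipulated test data
        accuracies.append(float(np.mean(clf.predict(X_adv) == y_test)))
    return accuracies

# Example usage with a trivial placeholder attack that shifts every feature:
# curve = security_evaluation_curve(clf, lambda c, X, y, e: X + e,
#                                   X_test, y_test, strengths=[0.0, 0.1, 0.2, 0.5])
</source>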
